Distributed Gradient Flow: Nonsmoothness, Nonconvexity, and Saddle Point Evasion
Authors
Abstract
The article considers distributed gradient flow (DGF) for multiagent nonconvex optimization. DGF is a continuous-time approximation of distributed gradient descent that is often easier to study than its discrete-time counterpart. The article has two main contributions. First, it considers the optimization of nonsmooth, nonconvex objective functions and shows that DGF converges to critical points in this setting. The article then considers the problem of avoiding saddle points. It is shown that if agents' objective functions are assumed to be smooth and nonconvex, then DGF can only converge to a saddle point from a zero-measure set of initial conditions. To establish this result, the article proves a stable manifold theorem for DGF, which is a fundamental contribution of independent interest. In a companion article, analogous results are derived for discrete-time algorithms.
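As an illustration, below is a minimal numerical sketch of DGF-style dynamics, where each agent follows its local negative gradient plus a consensus term coupling it to its neighbors. The quadratic local objectives, the ring topology, the consensus gain beta, and the forward-Euler integration are all illustrative assumptions, not the formulation analyzed in the article.

```python
import numpy as np

# Sketch of distributed gradient flow dynamics:
#   dx_i/dt = -grad f_i(x_i) - beta * sum_{j in N(i)} (x_i - x_j)
# integrated with forward Euler. Toy setup: f_i(x) = 0.5*||x - target_i||^2
# on a ring graph. These choices are assumptions for illustration only.

rng = np.random.default_rng(0)
n_agents, dim = 5, 2
targets = rng.normal(size=(n_agents, dim))

def local_grad(i, x):
    # gradient of f_i(x) = 0.5*||x - target_i||^2
    return x - targets[i]

# ring communication graph
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

x = rng.normal(size=(n_agents, dim))   # initial agent states
beta, dt = 10.0, 0.01                  # consensus gain, Euler step
for _ in range(3000):
    dx = np.zeros_like(x)
    for i in range(n_agents):
        consensus = sum(x[i] - x[j] for j in neighbors[i])
        dx[i] = -local_grad(i, x[i]) - beta * consensus
    x += dt * dx

# For large beta the agents cluster near the minimizer of the summed
# objective, which for these quadratics is the average of the targets.
print("agent states:\n", x)
print("average of targets:", targets.mean(axis=0))
```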
Similar resources
Biased gradient squared descent saddle point finding method.
The harmonic approximation to transition state theory simplifies the problem of calculating a chemical reaction rate to identifying relevant low energy saddle points in a chemical system. Here, we present a saddle point finding method which does not require knowledge of specific product states. In the method, the potential energy landscape is transformed into the square of the gradient, which c...
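A rough sketch of the gradient-squared transform mentioned in this snippet: descending on g(x) = ||grad f(x)||^2 drives the iterate toward points where grad f = 0, which include saddle points of f. The test function and the finite-difference gradient of g are assumptions for illustration; this is the plain transform, not the paper's biased variant.

```python
import numpy as np

# Descend on g(x) = ||grad f(x)||^2, whose minima include saddles of f.
# Toy f(x, y) = x^2 - y^2 with a saddle at the origin (an assumption).

def f_grad(x):
    return np.array([2.0 * x[0], -2.0 * x[1]])

def g(x):
    gr = f_grad(x)
    return gr @ gr

def g_grad(x, h=1e-6):
    # central finite differences on g
    out = np.zeros_like(x)
    for k in range(len(x)):
        e = np.zeros_like(x)
        e[k] = h
        out[k] = (g(x + e) - g(x - e)) / (2.0 * h)
    return out

x = np.array([0.8, -0.6])
for _ in range(2000):
    x -= 0.05 * g_grad(x)

print("converged to:", x)  # near the saddle (0, 0) of f
```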
Multilevel Gradient Uzawa Algorithms for Symmetric Saddle Point Problems
In this paper, we introduce a general multilevel gradient Uzawa algorithm for symmetric saddle point systems. We compare its performance with the performance of the standard Uzawa multilevel algorithm. The main idea of the approach is to combine a double inexact Uzawa algorithm at the continuous level with a gradient type algorithm at the discrete level. The algorithm is based on the existence ...
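For context, a minimal single-level Uzawa iteration for a symmetric saddle point system is sketched below. The random blocks and the exact inner solve are toy assumptions; the paper's multilevel gradient variant replaces the inner solve with gradient-type sweeps across discretization levels, which is not reproduced here.

```python
import numpy as np

# Uzawa iteration for the symmetric saddle point system
#   [A  B^T] [u]   [f]
#   [B   0 ] [p] = [g]
# u-update solves the first block row; p-update is gradient ascent
# on the multiplier (equivalently, Richardson on the Schur complement).

rng = np.random.default_rng(1)
n, m = 8, 3
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)           # symmetric positive definite
B = rng.normal(size=(m, n))           # generically full row rank
f, g = rng.normal(size=n), rng.normal(size=m)

# safe step: alpha in (0, 2/lambda_max(S)) with S = B A^{-1} B^T
S = B @ np.linalg.solve(A, B.T)
alpha = 1.0 / np.linalg.eigvalsh(S).max()

u, p = np.zeros(n), np.zeros(m)
for _ in range(500):
    u = np.linalg.solve(A, f - B.T @ p)   # inner solve (exact in this toy)
    p = p + alpha * (B @ u - g)           # multiplier update

residual = np.linalg.norm(A @ u + B.T @ p - f) + np.linalg.norm(B @ u - g)
print("residual:", residual)
```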
On nonsymmetric saddle point matrices that allow conjugate gradient iterations
Linear systems in saddle point form are often symmetric and highly indefinite. Indefiniteness, however, is a major challenge for iterative solvers such as Krylov subspace methods. It has been noted by several authors that a simple trick, namely negating the second block row of the saddle point system, leads to an equivalent linear system with a nonsymmetric coefficient matrix A whose spectrum i...
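The block-row negation described in this snippet is easy to verify numerically. The small random blocks below are illustrative assumptions; the sketch only checks how the spectrum changes, not the paper's conjugate gradient analysis.

```python
import numpy as np

# Negating the second block row of K = [[A, B^T], [B, 0]] turns a
# symmetric indefinite matrix into a nonsymmetric one whose eigenvalues
# move into the right half plane (A symmetric positive definite here).

rng = np.random.default_rng(2)
n, m = 6, 2
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)
B = rng.normal(size=(m, n))

K = np.block([[A, B.T], [B, np.zeros((m, m))]])       # symmetric, indefinite
K_neg = np.block([[A, B.T], [-B, np.zeros((m, m))]])  # second block row negated

print("symmetric form, real eigenvalues of both signs:",
      np.sort(np.linalg.eigvals(K).real))
print("negated form, eigenvalue real parts all positive:",
      np.sort(np.linalg.eigvals(K_neg).real))
```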
Adaptive Primal-dual Hybrid Gradient Methods for Saddle-point Problems
The Primal-Dual hybrid gradient (PDHG) method is a powerful optimization scheme that breaks complex problems into simple sub-steps. Unfortunately, PDHG methods require the user to choose stepsize parameters, and the speed of convergence is highly sensitive to this choice. We introduce new adaptive PDHG schemes that automatically tune the stepsize parameters for fast convergence without user inp...
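A fixed-step PDHG sketch on a small toy problem follows. The problem instance and the fixed steps tau and sigma are assumptions for illustration; the paper's contribution, tuning these steps adaptively, is not reproduced here.

```python
import numpy as np

# Fixed-step PDHG for min_x ||K x||_1 + 0.5*||x - b||^2, written as the
# saddle problem min_x max_{||y||_inf <= 1} <K x, y> + 0.5*||x - b||^2.

rng = np.random.default_rng(3)
m, n = 10, 6
K = rng.normal(size=(m, n))
b = rng.normal(size=n)

L = np.linalg.norm(K, 2)      # operator norm of K
tau = sigma = 0.9 / L         # steps satisfying tau*sigma*||K||^2 < 1

x = np.zeros(n)
x_bar = x.copy()
y = np.zeros(m)
for _ in range(2000):
    # dual prox: projection onto the infinity-norm unit ball
    y = np.clip(y + sigma * (K @ x_bar), -1.0, 1.0)
    # primal prox of 0.5*||x - b||^2 in closed form
    x_new = (x - tau * (K.T @ y) + tau * b) / (1.0 + tau)
    x_bar = 2.0 * x_new - x   # extrapolation step
    x = x_new

print("objective:", np.abs(K @ x).sum() + 0.5 * np.sum((x - b) ** 2))
```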
SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms
Support Vector Machine is one of the most classical approaches for classification and regression. Despite being studied for decades, obtaining practical algorithms for SVM is still an active research problem in machine learning. In this paper, we propose a new perspective for SVM via saddle point optimization. We provide an algorithm which achieves (1 − )-approximations with running time Õ(nd +...
Journal
Journal title: IEEE Transactions on Automatic Control
Year: 2022
ISSN: 0018-9286, 1558-2523, 2334-3303
DOI: https://doi.org/10.1109/tac.2021.3111853